Search Results
Search for: All records
Total Resources: 5
Filter by Author / Creator:
- Glass, James (5)
- Cox, David (4)
- Feris, Rogerio (3)
- Hansen, Jacob (3)
- Kang, Junmo (3)
- Karlinsky, Leonid (3)
- Luo, Hongyin (3)
- Ritter, Alan (3)
- Zhu, Yada (2)
- Bhati, Saurabhchand (1)
- Chang, Shiyu (1)
- Cho, Kyunghyun (1)
- Chuang, Yung-Sung (1)
- Gimpel, Kevin (1)
- Harwath, David (1)
- He, Tianxing (1)
- Kim, Yoon (1)
- Kumar, Sachin (1)
- Lai, Cheng-I Jeff (1)
- Livescu, Karen (1)
-
- Kang, Junmo; Luo, Hongyin; Zhu, Yada; Hansen, Jacob; Glass, James; Cox, David; Ritter, Alan; Feris, Rogerio; Karlinsky, Leonid (ACL 2024 (Findings)). Free, publicly-accessible full text available April 25, 2026.
- He, Tianxing; Zhang, Jingyu; Wang, Tianle; Kumar, Sachin; Cho, Kyunghyun; Glass, James; Tsvetkov, Yulia (ACL: Annual Meeting of the Association for Computational Linguistics). In this work, we explore a useful but often neglected methodology for robustness analysis of text generation evaluation metrics: stress tests with synthetic data. Basically, we design and synthesize a wide range of potential errors and check whether they result in a commensurate drop in the metric scores. We examine a range of recently proposed evaluation metrics based on pretrained language models, for the tasks of open-ended generation, translation, and summarization. Our experiments reveal interesting insensitivities, biases, or even loopholes in existing metrics. For example, we find that BERTScore is confused by truncation errors in summarization, and MAUVE (built on top of GPT-2) is insensitive to errors at the beginning or middle of generations. Further, we investigate the reasons behind these blind spots and suggest practical workarounds for a more reliable evaluation of text generation. We have released our code and data at https://github.com/cloudygoose/blindspot_nlg. A short sketch of this stress-test setup appears after the results list.
- Lai, Cheng-I Jeff; Shi, Freda; Peng, Puyuan; Kim, Yoon; Gimpel, Kevin; Chang, Shiyu; Chuang, Yung-Sung; Bhati, Saurabhchand; Cox, David; Harwath, David; et al. (IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)). Full Text Available.
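
The He et al. abstract above describes stress-testing evaluation metrics by synthesizing errors and checking for a commensurate score drop. The sketch below is a hedged illustration of that idea, not the paper's released code (which lives at the linked repository): it assumes the `bert-score` Python package, and the example sentences and the simple mid-string truncation are invented for the demo.

```python
# Minimal stress test in the spirit of He et al.: inject a synthetic
# truncation error into a candidate summary and check whether BERTScore
# drops accordingly. Assumes `pip install bert-score`; all text below is
# illustrative, not from the paper's data.
from bert_score import score

references = [
    "The committee approved the city budget after a lengthy debate, "
    "citing concerns about infrastructure spending and deferred maintenance."
]
faithful = ["The committee approved the budget despite infrastructure concerns."]
# Synthetic error: cut the candidate in half, simulating a truncated summary.
truncated = [faithful[0][: len(faithful[0]) // 2]]

for label, candidates in (("faithful", faithful), ("truncated", truncated)):
    # score() returns precision/recall/F1 tensors, one entry per pair.
    _, _, f1 = score(candidates, references, lang="en", verbose=False)
    print(f"{label}: BERTScore F1 = {f1.item():.3f}")

# A robust metric should give the truncated summary a clearly lower F1;
# the paper reports cases where this expected drop fails to appear.
```

The same pattern generalizes: swap in other synthetic perturbations (repetition, shuffling, negation) or other metrics to probe for the kinds of blind spots the paper catalogs.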